Results 1 - 20 of 54
1.
CEUR Workshop Proceedings ; 3400:93-106, 2022.
Article in English | Scopus | ID: covidwho-20240174

ABSTRACT

In the field of explainable artificial intelligence (XAI), causal models and argumentation frameworks constitute two formal approaches that provide definitions of the notion of explanation. These symbolic approaches rely on logical formalisms to reason by abduction or to search for causalities, starting from a formal model of a problem or situation. They are designed to satisfy properties that have been established as necessary based on the study of human-human explanations. As a consequence, they appear to be particularly interesting for human-machine interactions as well. In this paper, we show the equivalence between a particular type of causal models, which we call argumentative causal graphs (ACG), and argumentation frameworks. We also propose a transformation between these two systems and examine how one definition of an explanation in argumentation theory is transposed when moving to ACG. To illustrate our proposition, we use a very simplified version of a screening agent for COVID-19. © 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0)
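
To make the argumentation-framework side of this equivalence concrete, here is a toy sketch in Python: a handful of invented screening arguments (not the paper's ACG), an attack relation, and the standard iterative computation of the grounded extension, i.e., the set of arguments that can be safely accepted.

```python
# Toy attack relation for a screening scenario (arguments invented here,
# not taken from the paper's ACG).
attacks = {
    ("vaccinated", "at_risk"),
    ("negative_test", "symptomatic"),
    ("symptomatic", "no_screening_needed"),
}
arguments = {a for pair in attacks for a in pair}

def grounded_extension(arguments, attacks):
    """Iteratively accept arguments whose attackers are all defeated."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments - accepted - defeated:
            if a in defeated:                     # defeated earlier in this pass
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:             # all attackers are already out
                accepted.add(a)
                defeated |= {y for (x, y) in attacks if x == a}
                changed = True
    return accepted

# {'vaccinated', 'negative_test', 'no_screening_needed'}
print(grounded_extension(arguments, attacks))
```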

2.
Risks ; 11(5), 2023.
Article in English | Scopus | ID: covidwho-20235997

ABSTRACT

Predictive analytics of financial markets in developed and emerging economies during the COVID-19 regime is undeniably challenging due to unavoidable uncertainty and the profound proliferation of negative news on different platforms. Tracking the media echo is crucial to explaining and anticipating abrupt fluctuations in financial markets. The present research proposes a robust framework capable of channeling macroeconomic reflectors and essential media chatter-linked variables to draw precise forecasts for the Spanish and Indian stock markets. The predictive structure combines Isometric Mapping (ISOMAP), a non-linear feature transformation tool, with Gradient Boosting Regression (GBR), an ensemble machine learning technique, to perform predictive modelling. Explainable Artificial Intelligence (XAI) is used to interpret the black-box predictive model and infer meaningful insights. The overall results duly justify the incorporation of local and global media chatter indices in explaining the dynamics of the respective financial markets. The findings imply marginally better predictability of Indian stock markets than of their Spanish counterparts. By applying a robust research framework, the current work compares and contrasts the reaction of developed and developing financial markets during the COVID-19 pandemic, which has been argued to closely resemble a Black Swan event. The insights on the dependence of stock markets on macroeconomic indicators can be leveraged for policy formulations aimed at augmenting household finance. © 2023 by the authors.
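
A minimal sketch of the ISOMAP-plus-GBR idea using scikit-learn; the synthetic features and forecasting target below are placeholders, not the authors' macroeconomic or media-chatter data.

```python
# A hypothetical ISOMAP -> GBR forecasting pipeline on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.manifold import Isomap
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                      # 12 illustrative features
y = 2.0 * X[:, 0] + rng.normal(size=500)            # toy forecasting target

# shuffle=False keeps time order, as a chronological split would
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)
model = make_pipeline(
    Isomap(n_neighbors=10, n_components=5),         # non-linear feature transform
    GradientBoostingRegressor(n_estimators=300, learning_rate=0.05),
)
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```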

3.
J King Saud Univ Comput Inf Sci ; 35(7): 101596, 2023 Jul.
Article in English | MEDLINE | ID: covidwho-2328320

ABSTRACT

COVID-19 is a contagious disease that affects the human respiratory system. Infected individuals may develop serious illnesses, and complications may result in death. Using medical images to detect COVID-19 from essentially identical thoracic anomalies is challenging because it is time-consuming, laborious, and prone to human error. This study proposes an end-to-end deep-learning framework based on deep feature concatenation and a multi-head self-attention network. Feature concatenation involves fine-tuning the pre-trained backbone models of DenseNet, VGG-16, and InceptionV3, which are trained on the large-scale ImageNet dataset, whereas the multi-head self-attention network is adopted for performance gain. End-to-end training and evaluation procedures are conducted using the COVID-19_Radiography_Dataset for binary and multi-class classification scenarios. The proposed model achieved overall accuracies (96.33% and 98.67%) and F1-scores (92.68% and 98.67%) for the multi-class and binary classification scenarios, respectively. In addition, this study highlights the gains of feature concatenation over the best-performing individual model in accuracy (98.0% vs. 96.33%) and F1-score (97.34% vs. 95.10%). Furthermore, a virtual representation of the saliency maps of the employed attention mechanism, focusing on the abnormal regions, is presented using explainable artificial intelligence (XAI) technology. The proposed framework provided better COVID-19 prediction results, outperforming other recent deep learning models on the same dataset.
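
A minimal sketch of the deep-feature-concatenation idea in Keras, assuming 224×224 RGB inputs and a binary head; the attention block, fine-tuning schedule, and per-backbone preprocessing are omitted, and the layer sizes are illustrative rather than the paper's.

```python
# Frozen ImageNet backbones -> pooled features -> concatenation -> binary head.
import tensorflow as tf
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import DenseNet121, InceptionV3, VGG16

inp = layers.Input(shape=(224, 224, 3))
backbones = [
    DenseNet121(include_top=False, weights="imagenet", pooling="avg"),
    VGG16(include_top=False, weights="imagenet", pooling="avg"),
    InceptionV3(include_top=False, weights="imagenet", pooling="avg"),
]
features = []
for net in backbones:
    net.trainable = False               # fine-tuning would unfreeze top blocks
    features.append(net(inp))
x = layers.Concatenate()(features)      # deep feature concatenation
x = layers.Dense(256, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)   # COVID-19 vs. normal

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```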

4.
Explainable Artificial Intelligence in Medical Decision Support Systems ; 50:1-43, 2022.
Article in English | Web of Science | ID: covidwho-2321784

ABSTRACT

The healthcare sector is very interested in machine learning (ML) and artificial intelligence (AI). Nevertheless, applying AI in scientific contexts is difficult due to explainability issues. Explainable AI (XAI) has been studied as a potential remedy for the problems with current AI methods. In contrast to opaque AI techniques such as deep learning, ML combined with XAI may be capable of both explaining models and supporting judgments. Medical decision support systems (MDSS) are computer applications that affect the decisions doctors make regarding particular patients at a specific moment. MDSS have played a crucial role in efforts to improve patient safety and the standard of care, particularly for non-communicable illnesses. Moreover, they have been a crucial prerequisite for effectively utilizing electronic health record (EHR) data. This chapter offers a broad overview of the application of XAI in MDSS for various infectious diseases, summarizes recent research on the use and effects of MDSS in healthcare with regard to non-communicable diseases, and offers suggestions for users to keep in mind as these systems are incorporated into healthcare systems and utilized outside research and development contexts.

5.
Cancers (Basel) ; 15(9)2023 Apr 25.
Article in English | MEDLINE | ID: covidwho-2319332

ABSTRACT

Worldwide, the coronavirus has intensified the management problems of health services, significantly harming patients. Among the most affected processes are the prevention, diagnosis, and treatment of cancer patients. Breast cancer is among the most affected diseases, with cancer overall accounting for nearly 20 million new cases and at least 10 million deaths in 2020. Various studies have been carried out to support the management of this disease globally. This paper presents a decision support strategy for health teams based on machine learning (ML) tools and explainable artificial intelligence (XAI) algorithms. The main methodological contributions are: first, the evaluation of different ML algorithms for classifying patients with and without cancer from the available dataset; and second, an ML methodology combined with an XAI algorithm, which makes it possible to predict the disease and to interpret the variables and how they affect patients' health. The results show, first, that the XGBoost algorithm has the best predictive capacity, with an accuracy of 0.813 on the training data and 0.81 on the test data; and second, that the SHAP algorithm makes it possible to identify the relevant variables and their level of significance in the prediction, and to quantify their impact on the patients' clinical condition, which will allow health teams to offer early and personalized alerts for each patient.
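
A minimal sketch of the XGBoost-plus-SHAP workflow, shown on scikit-learn's built-in breast-cancer dataset as a stand-in for the study's data; hyperparameters are illustrative.

```python
# XGBoost classifier plus SHAP feature attributions on a stand-in dataset.
import shap
import xgboost as xgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))

explainer = shap.TreeExplainer(clf)       # exact, fast SHAP for tree ensembles
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te)      # ranks variables and shows direction
```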

6.
Business and Information Systems Engineering ; 2023.
Article in English | Scopus | ID: covidwho-2301782

ABSTRACT

The most promising standard machine learning methods can deliver highly accurate classification results, often outperforming standard white-box methods. However, it is hardly possible for humans to fully understand the rationale behind black-box results, and thus these powerful methods hamper the creation of new knowledge on the part of humans and the broader acceptance of this technology. Explainable Artificial Intelligence attempts to overcome this problem by making the results more interpretable, while Interactive Machine Learning integrates humans into the process of insight discovery. The paper builds on recent successes in combining these two cutting-edge technologies and proposes how Explanatory Interactive Machine Learning (XIL) is embedded in a generalizable Action Design Research (ADR) process – called XIL-ADR. This approach can be used to analyze data, inspect models, and iteratively improve them. The paper shows the application of this process using the diagnosis of viral pneumonia, e.g., COVID-19, as an illustrative example. By these means, the paper also illustrates how XIL-ADR can help identify shortcomings of standard machine learning projects, generate new insights on the part of the human user, and thereby help unlock the full potential of AI-based systems for organizations and research. © 2023, The Author(s).

7.
Heliyon ; 9(4): e15137, 2023 Apr.
Article in English | MEDLINE | ID: covidwho-2303139

ABSTRACT

The coronavirus disease (COVID-19) has continued to cause severe challenges during this unprecedented time, affecting every part of daily life in terms of health, economics, and social development. There is an increasing demand for chest X-ray (CXR) scans, as pneumonia is the primary and vital complication of COVID-19. CXR is widely used as a screening tool for lung-related diseases due to its simple and relatively inexpensive application. However, these scans require expert radiologists to interpret the results for clinical decisions, i.e., diagnosis, treatment, and prognosis. The digitalization of various sectors, including healthcare, has accelerated during the pandemic, with the use and importance of Artificial Intelligence (AI) dramatically increasing. This paper proposes a model using an Explainable Artificial Intelligence (XAI) technique to detect and interpret COVID-19-positive CXR images. We further analyze the impact of COVID-19-positive CXR images using heatmaps. The proposed model leverages transfer learning and data augmentation techniques for faster and more adequate model training. Lung segmentation is applied to further enhance model performance. In a comparison of pre-trained networks, the ResNet model achieved the highest classification performance (F1-score: 98%).
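
The abstract does not name its heatmap technique, so as an assumption here is a Grad-CAM-style sketch on a stock ImageNet ResNet50, with a random array standing in for a preprocessed CXR; a real pipeline would use the fine-tuned, lung-segmented model.

```python
# Grad-CAM-style heatmap on a stock ImageNet ResNet50.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.ResNet50(weights="imagenet")
grad_model = tf.keras.Model(
    model.input,
    [model.get_layer("conv5_block3_out").output, model.output],
)

img = np.random.rand(1, 224, 224, 3).astype("float32")  # stand-in for a CXR
with tf.GradientTape() as tape:
    conv_out, preds = grad_model(img)
    top_score = tf.reduce_max(preds, axis=1)        # score of the top class
grads = tape.gradient(top_score, conv_out)          # d(score)/d(feature maps)
weights = tf.reduce_mean(grads, axis=(1, 2))        # pool gradients per channel
cam = tf.nn.relu(tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1))
heatmap = (cam / tf.reduce_max(cam)).numpy()[0]     # normalized 7x7 map
print(heatmap.shape)
```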

8.
Bioengineering (Basel) ; 10(4)2023 Mar 31.
Article in English | MEDLINE | ID: covidwho-2293082

ABSTRACT

The coronavirus pandemic emerged in early 2020 and turned out to be deadly, killing a vast number of people all around the world. Fortunately, vaccines have been discovered, and they seem effective in controlling the severe prognosis induced by the virus. The reverse transcription-polymerase chain reaction (RT-PCR) test is the current gold standard for diagnosing different infectious diseases, including COVID-19; however, it is not always accurate. Therefore, it is crucial to find an alternative diagnostic method that can support the results of the standard RT-PCR test. Hence, a decision support system has been proposed in this study that uses machine learning and deep learning techniques to predict the COVID-19 diagnosis of a patient using clinical, demographic and blood markers. The patient data used in this research were collected from two Manipal hospitals in India, and a custom-made, stacked, multi-level ensemble classifier has been used to predict the COVID-19 diagnosis. Deep learning techniques such as deep neural networks (DNN) and one-dimensional convolutional networks (1D-CNN) have also been utilized. Further, explainable artificial intelligence (XAI) techniques such as SHapley Additive exPlanations (SHAP), ELI5, local interpretable model-agnostic explanations (LIME), and QLattice have been used to make the models more precise and understandable. Among all of the algorithms, the multi-level stacked model obtained an excellent accuracy of 96%. The precision, recall, F1-score and AUC obtained were 94%, 95%, 94% and 98%, respectively. The models can be used as a decision support system for the initial screening of coronavirus patients and can also help ease the existing burden on medical infrastructure.
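
A minimal sketch of a stacked ensemble on tabular data with scikit-learn; the base learners, meta-learner, and synthetic data below are illustrative choices, not the study's custom multi-level configuration.

```python
# Two-level stack: RF and SVM probabilities feed a logistic meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svc", make_pipeline(StandardScaler(), SVC(probability=True))),
    ],
    final_estimator=LogisticRegression(),   # meta-learner on base predictions
    stack_method="predict_proba",
    cv=5,                                   # out-of-fold predictions for stacking
)
stack.fit(X, y)
print("training accuracy:", stack.score(X, y))
```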

9.
International Journal of Cooperative Information Systems ; 31(3-4), 2022.
Article in English | Scopus | ID: covidwho-2277016

ABSTRACT

COVID-19 preventive measures have been a hindrance to millions of people across the globe, affecting not only their daily routines but also their mental stability. Among the several measures to prevent the spread of COVID-19, lockdowns are important because they considerably reduce the number of cases. News about COVID-19 spreads rapidly on social media. In particular, Twitter is widely used to share posts and opinions about the COVID-19 pandemic. Sentiment analysis (SA) on tweets can be used to determine different emotions such as anger, disgust, sadness, joy, and trust. However, transparency is needed to understand how a given sentiment is evaluated by black-box machine learning models. With this motivation, this paper presents a new explainable artificial intelligence (XAI)-based hybrid approach to analyze the sentiments of tweets during different COVID-19 lockdowns. The proposed model attempts to understand the public's emotions during the first, second, and third lockdowns in India by analyzing tweets on social media, which demonstrates the novelty of the work. A new hybrid model is derived by integrating a surrogate model and the local interpretable model-agnostic explanations (LIME) model to categorize and predict different human emotions. At the same time, the Topj similarity evaluation metric is employed to determine the similarity between the original and surrogate models. Furthermore, the top words are identified using feature importance. Finally, the overall emotions during the first, second, and third lockdowns are estimated. To validate the enhanced outcomes of the proposed method, a series of experimental analyses was performed on the IEEE DataPort and Twitter API dataset. The simulation results highlighted the supremacy of the proposed model, with higher average precision, recall, F-score, and accuracy of 95.69%, 96.80%, 95.04%, and 96.76%, respectively. The outcome of the study reported that the public initially had negative feelings and then started experiencing positive emotions during the third lockdown. © 2022 World Scientific Publishing Company.
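
A minimal sketch of LIME explaining a text sentiment classifier, assuming a simple scikit-learn TF-IDF pipeline and invented toy tweets; the paper's surrogate-model integration and emotion categories are not reproduced here.

```python
# LIME explaining a toy TF-IDF + logistic-regression sentiment model.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "lockdown is unbearable and exhausting",
    "grateful for the frontline workers",
    "stuck at home again, feeling hopeless",
    "vaccination drive brings real hope",
]
labels = [0, 1, 0, 1]                     # 0 = negative, 1 = positive (toy)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(tweets, labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance(tweets[0], clf.predict_proba, num_features=3)
print(exp.as_list())                      # per-word contribution to the sentiment
```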

10.
International Review of Financial Analysis ; 87, 2023.
Article in English | Scopus | ID: covidwho-2260555

ABSTRACT

Non-Fungible Token (NFT) and Decentralized Finance (DeFi) assets have seen growing media coverage and garnered considerable investor traction despite being classified as a niche in the digital financial sector. The lack of substantial research demystifying the dynamics of NFT and DeFi coins motivates a scrupulous analysis of the sector. This work critically delves into the evolutionary pattern of NFTs and DeFi assets to perform predictive analytics on them during the COVID-19 regime. The multivariate framework comprises the systematic inclusion of explanatory features embodying technical indicators, key macroeconomic indicators, and constructs linked to media hype and pandemic-related sentiment, together with nonlinear feature engineering and ensemble machine learning. Isometric Mapping (ISOMAP) and Uniform Manifold Approximation and Projection (UMAP) techniques are conjugated with Gradient Boosting Regression (GBR) and Random Forest (RF) to enable the predictive analysis. The predictive performance rationalizes the framework's capacity to accurately predict the prices of the majority of the NFT and DeFi coins during the ongoing financial distress period. Additionally, Explainable Artificial Intelligence (XAI) methodologies are used to comprehend the nature of the impact of the explanatory variables. Findings suggest that the daily movements of NFTs and DeFi assets depend strongly on their own past movements. © 2023 The Authors

11.
Applied Sciences ; 13(5):3125, 2023.
Article in English | ProQuest Central | ID: covidwho-2252074

ABSTRACT

Kidney abnormality is one of the major concerns in modern society, and it affects millions of people around the world. To diagnose different abnormalities in human kidneys, a narrow-beam X-ray imaging procedure, computed tomography, is used, which creates cross-sectional slices of the kidneys. Several deep-learning models have been successfully applied to computed tomography images for classification and segmentation purposes. However, it has been difficult for clinicians to interpret the model's specific decisions, creating a "black box" system. Additionally, it has been difficult to integrate complex deep-learning models into internet-of-medical-things devices due to demanding training parameters and memory-resource costs. To overcome these issues, this study proposed (1) a lightweight customized convolutional neural network to detect kidney cysts, stones, and tumors and (2) understandable AI outputs, with Shapley values based on the SHapley Additive exPlanations (SHAP) method and predictive results based on local interpretable model-agnostic explanations (LIME), to illustrate the deep-learning model. The proposed CNN model performed better than other state-of-the-art methods and obtained an accuracy of 99.52 ± 0.84% under K = 10-fold stratified sampling. With improved results and better interpretive power, the proposed work provides clinicians with conclusive and understandable results.
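
A minimal sketch of what a lightweight CNN of this kind can look like in Keras; the layer sizes, input shape, and four-class head are illustrative assumptions, not the paper's architecture.

```python
# A small CNN with global average pooling to keep the parameter budget low,
# as needed for internet-of-medical-things deployment.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),              # grayscale CT slice
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),                # avoids large dense layers
    layers.Dense(4, activation="softmax"),          # cyst / stone / tumor / normal
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```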

12.
Comput Methods Programs Biomed ; 233: 107492, 2023 May.
Article in English | MEDLINE | ID: covidwho-2266603

ABSTRACT

BACKGROUND AND PURPOSE: COVID-19, which emerged in Wuhan (China) at the end of 2019, is one of the deadliest and fastest-spreading pandemics. According to the World Health Organization (WHO), there are more than 100 million infection cases worldwide. Therefore, research models are crucial for managing the pandemic scenario. However, because the behavior of this epidemic is so complex and difficult to understand, an effective model must not only produce accurate predictive results but must also offer a clear explanation that enables human experts to act proactively. For this reason, this study was designed to interpret troponin levels in the course of COVID-19 with explainable white-box algorithms. METHODS: Using the pandemic data provided by Erzurum Training and Research Hospital (decision number: 2022/13-145), an interpretable explanation of troponin data in the course of COVID-19 was provided with SHapley Additive exPlanations (SHAP) algorithms. Five machine learning (ML) algorithms were developed. Model performances were determined based on training and test accuracies, precision, F1-score, recall, and AUC (Area Under the Curve) values. Feature importance was estimated according to Shapley values by applying the SHAP method to the model with the highest accuracy. The model, created with Streamlit v.3.9, was integrated into an interface named CVD22. RESULTS: Among the five ML models created with the pandemic data, the best model was selected with values of 1.0, 0.83, 0.86, 0.83, 0.80, and 0.91 for train accuracy, test accuracy, precision, F1-score, recall, and AUC, respectively. As a result of feature selection and the SHAP algorithm applied to the XGBoost model, it was determined that mean D-dimer, mortality, CK-MB (creatine kinase myocardial band), and glucose were the features with the highest importance for the model's estimation. CONCLUSIONS: Recent advances in new explainable artificial intelligence (XAI) models have successfully made it possible to predict the future using large historical datasets. Therefore, throughout the ongoing pandemic, CVD22 (https://cvd22covid.streamlitapp.com/) can be used as a guide to help authorities or medical professionals make the best decisions quickly.
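
A minimal sketch of serving a trained model behind a Streamlit interface in the style of CVD22; the stand-in logistic model and the three input fields are assumptions for illustration, not the study's XGBoost model or full feature set. Run it with `streamlit run app.py`.

```python
# app.py - trivial stand-in model served behind a Streamlit form.
import numpy as np
import streamlit as st
from sklearn.linear_model import LogisticRegression

# Stand-in model so the sketch runs end-to-end; the real CVD22 app would load
# the trained XGBoost model instead.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X.sum(axis=1) > 0).astype(int)
model = LogisticRegression().fit(X, y)

st.title("COVID-19 troponin risk demo")       # hypothetical stand-in for CVD22
ddimer = st.number_input("Mean D-dimer", value=0.0)
ckmb = st.number_input("CK-MB", value=0.0)
glucose = st.number_input("Glucose", value=0.0)

if st.button("Predict"):
    proba = model.predict_proba([[ddimer, ckmb, glucose]])[0, 1]
    st.write(f"Estimated probability of the positive class: {proba:.2f}")
```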


Subject(s)
Artificial Intelligence , COVID-19 , Humans , Algorithms , Fibrin Fibrinogen Degradation Products
13.
Brain Sci ; 13(3)2023 Mar 20.
Article in English | MEDLINE | ID: covidwho-2256818

ABSTRACT

Coronavirus disease (COVID-19) represents one of the greatest challenges to public health in modern history. As the disease continues to spread globally, medical and allied healthcare professionals have become one of the most affected sectors. Stress and anxiety are indirect effects of the COVID-19 pandemic. Therefore, it is paramount to understand and categorize their perceived levels of stress, as stress can be a precipitating factor leading to mental illness. Here, we propose a computer-based method to better understand stress in healthcare workers facing COVID-19 at the beginning of the pandemic. We based our study on a representative sample of healthcare professionals attending to COVID-19 patients in the northeast region of Mexico at the beginning of the pandemic. We used a machine learning classification algorithm to obtain a visualization model for analyzing perceived stress. The C5.0 decision tree algorithm was used to study the dataset. We carried out an initial preprocessing statistical analysis for a group of 101 participants. We performed chi-square tests on all questions individually in order to validate the stress-level calculation (p < 0.05), and calculated a Cronbach's alpha of 0.94 and McDonald's omega of 0.95, demonstrating good internal consistency in the dataset. The obtained model misclassified only 6 of the 101 participants, missing two mild, three moderate, and one severe case (accuracy of 94.1%). We performed statistical correlation analysis to ensure the integrity of the method. In addition, based on the decision tree model, we concluded that severe stress cases can be related mostly to high levels of xenophobia and compulsive stress. This shows that applied machine learning algorithms represent valuable tools in the assessment of perceived stress, which can potentially be adapted to other areas of the medical field.
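
A minimal sketch of the internal-consistency check reported above: Cronbach's alpha computed from a respondents-by-items score matrix. The synthetic questionnaire data below is an assumption for illustration, not the study's survey.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / total variance).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of questionnaire scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(101, 1))                   # shared "stress" factor
answers = latent + 0.5 * rng.normal(size=(101, 10))  # 10 correlated items
print(f"alpha = {cronbach_alpha(answers):.2f}")      # high for consistent items
```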

14.
Eur J Radiol ; 157: 110592, 2022 Dec.
Article in English | MEDLINE | ID: covidwho-2261340

ABSTRACT

OBJECTIVES: This study aims to contribute to an understanding of the explainability of computer-aided diagnosis studies in radiology that use end-to-end deep learning, by providing a quantitative overview of methodological choices and by discussing the implications of these choices for explainability. METHODS: A systematic review was executed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Primary diagnostic test accuracy studies using end-to-end deep learning for radiology were identified for the period January 1st, 2016, to January 20th, 2021. Results were synthesized by identifying the explanation goals, measures, and explainable AI techniques. RESULTS: This study identified 490 primary diagnostic test accuracy studies using end-to-end deep learning for radiology, of which 179 (37%) used explainable AI. In 147 of the 179 studies (82%), explainable AI was used for the goal of model visualization and inspection. Class activation mapping is the most common technique, used in 117 of the 179 studies (65%). Only one study used measures to evaluate the outcome of its explainable AI. CONCLUSIONS: A considerable portion of computer-aided diagnosis studies provide a form of explainability for their deep learning models for the purpose of model visualization and inspection. The techniques commonly chosen by these studies (class activation mapping, feature activation mapping and t-distributed stochastic neighbor embedding) have potential limitations. Because researchers generally do not measure the quality of their explanations, we are agnostic about how effective these explanations are at addressing the black-box issues of deep learning in radiology.


Subject(s)
Deep Learning , Radiology , Humans , Computers , Diagnosis, Computer-Assisted , Radiography
15.
Computer Systems Science and Engineering ; 46(1):209-224, 2023.
Article in English | Scopus | ID: covidwho-2239025

ABSTRACT

Recent advancements in the Internet of Things (IoT), 5G networks, and cloud computing (CC) have led to the development of Human-centric IoT (HIoT) applications that transform human physical monitoring into machine-based monitoring. HIoT systems find use in several applications such as smart cities, healthcare, and transportation. Moreover, HIoT systems and explainable artificial intelligence (XAI) tools can be deployed in the healthcare sector for effective decision-making. The COVID-19 pandemic has become a global health issue that necessitates automated and effective diagnostic tools to detect the disease at an initial stage. This article presents a new quantum-inspired differential evolution with explainable artificial intelligence based COVID-19 Detection and Classification (QIDEXAI-CDC) model for HIoT systems. The QIDEXAI-CDC model aims to identify the occurrence of COVID-19 using XAI tools on HIoT systems. The QIDEXAI-CDC model primarily uses bilateral filtering (BF) as a preprocessing tool to eradicate noise. In addition, RetinaNet is applied for the generation of useful feature vectors from radiological images. For COVID-19 detection and classification, a quantum-inspired differential evolution (QIDE) with kernel extreme learning machine (KELM) model is utilized. The QIDE algorithm helps to appropriately choose the weight and bias values of the KELM model. To report the enhanced COVID-19 detection outcomes of the QIDEXAI-CDC model, a wide range of simulations was carried out. Extensive comparative studies reported the supremacy of the QIDEXAI-CDC model over recent approaches. © 2023 Authors. All rights reserved.

16.
Prog Biophys Mol Biol ; 179: 1-9, 2023 05.
Article in English | MEDLINE | ID: covidwho-2245029

ABSTRACT

This study systematically reviews the Artificial Intelligence (AI) methods developed for the critical processes of COVID-19 gene data analysis, including diagnosis, prognosis, biomarker discovery, drug responsiveness, and vaccine efficacy. This systematic review follows the guidelines of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). We searched the PubMed, Embase, Web of Science, and Scopus databases to identify relevant articles from January 2020 to June 2022. The review includes published studies of AI-based COVID-19 gene modeling extracted through relevant keyword searches in academic databases. This study included 48 articles discussing AI-based genetic studies with several objectives. Ten articles discuss COVID-19 gene modeling with computational tools, and five articles evaluated ML-based diagnosis, with an observed accuracy of 97% for SARS-CoV-2 classification. Three articles on gene-based prognosis found host biomarkers detecting COVID-19 progression with 90% accuracy. Twelve manuscripts reviewed prediction models across various genome-analysis studies, nine articles examined gene-based in silico drug discovery, and another nine investigated AI-based vaccine development models. This study compiled the novel coronavirus gene biomarkers and targeted drugs identified through ML approaches in published clinical studies. This review provides sufficient evidence to delineate the potential of AI in analyzing complex gene information for COVID-19 modeling in multiple aspects such as diagnosis, drug discovery, and disease dynamics. AI models have had a substantial positive impact by enhancing the efficiency of the healthcare system during the COVID-19 pandemic.


Subject(s)
COVID-19 , Humans , COVID-19/diagnosis , Artificial Intelligence , SARS-CoV-2/genetics , Pandemics/prevention & control
17.
International Journal of Cooperative Information Systems ; 2023.
Article in English | Scopus | ID: covidwho-2234555

ABSTRACT

COVID-19 preventive measures have been a hindrance to millions of people across the globe, affecting not only their daily routines but also their mental stability. Among the several measures to prevent the spread of COVID-19, lockdowns are important because they considerably reduce the number of cases. News about COVID-19 spreads rapidly on social media. In particular, Twitter is widely used to share posts and opinions about the COVID-19 pandemic. Sentiment analysis (SA) on tweets can be used to determine different emotions such as anger, disgust, sadness, joy, and trust. However, transparency is needed to understand how a given sentiment is evaluated by black-box machine learning models. With this motivation, this paper presents a new explainable artificial intelligence (XAI)-based hybrid approach to analyze the sentiments of tweets during different COVID-19 lockdowns. The proposed model attempts to understand the public's emotions during the first, second, and third lockdowns in India by analyzing tweets on social media, which demonstrates the novelty of the work. A new hybrid model is derived by integrating a surrogate model and the local interpretable model-agnostic explanations (LIME) model to categorize and predict different human emotions. At the same time, the Topj similarity evaluation metric is employed to determine the similarity between the original and surrogate models. Furthermore, the top words are identified using feature importance. Finally, the overall emotions during the first, second, and third lockdowns are estimated. To validate the enhanced outcomes of the proposed method, a series of experimental analyses was performed on the IEEE DataPort and Twitter API dataset. The simulation results highlighted the supremacy of the proposed model, with higher average precision, recall, F-score, and accuracy of 95.69%, 96.80%, 95.04%, and 96.76%, respectively. The outcome of the study reported that the public initially had negative feelings and then started experiencing positive emotions during the third lockdown. © 2022 World Scientific Publishing Company.

18.
International Review of Financial Analysis ; : 102558, 2023.
Article in English | ScienceDirect | ID: covidwho-2220835

ABSTRACT

Non-Fungible Token (NFT) and Decentralized Finance (DeFi) assets have seen growing media coverage and garnered considerable investor traction despite being classified as a niche in the digital financial sector. The lack of substantial research demystifying the dynamics of NFT and DeFi coins motivates a scrupulous analysis of the sector. This work critically delves into the evolutionary pattern of NFTs and DeFi assets to perform predictive analytics on them during the COVID-19 regime. The multivariate framework comprises the systematic inclusion of explanatory features embodying technical indicators, key macroeconomic indicators, and constructs linked to media hype and pandemic-related sentiment, together with nonlinear feature engineering and ensemble machine learning. Isometric Mapping (ISOMAP) and Uniform Manifold Approximation and Projection (UMAP) techniques are conjugated with Gradient Boosting Regression (GBR) and Random Forest (RF) to enable the predictive analysis. The predictive performance rationalizes the framework's capacity to accurately predict the prices of the majority of the NFT and DeFi coins during the ongoing financial distress period. Additionally, Explainable Artificial Intelligence (XAI) methodologies are used to comprehend the nature of the impact of the explanatory variables. Findings suggest that the daily movements of NFTs and DeFi assets depend strongly on their own past movements.

19.
Comput Biol Med ; 154: 106619, 2023 03.
Article in English | MEDLINE | ID: covidwho-2220589

ABSTRACT

AIM: COVID-19 has revealed the need for fast and reliable methods to assist clinicians in diagnosing the disease. This article presents a model that applies explainable artificial intelligence (XAI) methods based on machine learning techniques to COVID-19 metagenomic next-generation sequencing (mNGS) samples. METHODS: The data set used in the study contains 15,979 gene expressions from 234 patients, 141 (60.3%) COVID-19-negative and 93 (39.7%) COVID-19-positive. The least absolute shrinkage and selection operator (LASSO) method was applied to select genes associated with COVID-19. The Support Vector Machine - Synthetic Minority Oversampling Technique (SVM-SMOTE) method was used to handle the class imbalance problem. Logistic regression (LR), SVM, random forest (RF), and extreme gradient boosting (XGBoost) models were constructed to predict COVID-19. An explainable approach based on local interpretable model-agnostic explanations (LIME) and SHapley Additive exPlanations (SHAP) methods was applied to determine COVID-19-associated biomarker candidate genes and improve the final model's interpretability. RESULTS: For the diagnosis of COVID-19, the XGBoost (accuracy: 0.930) model outperformed the RF (accuracy: 0.912), SVM (accuracy: 0.877), and LR (accuracy: 0.912) models. As a result of the SHAP analysis, the three most important genes associated with COVID-19 were IFI27, LGR6, and FAM83A. The LIME results showed that, in particular, a high level of IFI27 gene expression contributed to increasing the probability of the positive class. CONCLUSIONS: The proposed model (XGBoost) was able to predict COVID-19 successfully. The results show that machine learning combined with LIME and SHAP can explain the biomarker prediction for COVID-19 and provide clinicians with an intuitive understanding and interpretability of the impact of risk factors in the model.
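
A minimal sketch of the LASSO-selection, SVM-SMOTE oversampling, and XGBoost chain using imbalanced-learn's pipeline (so oversampling is applied only during fitting); the synthetic expression matrix and label rule below are assumptions mimicking the study's 234-patient, roughly 40%-positive setup.

```python
# LASSO-based gene selection -> SVM-SMOTE oversampling -> XGBoost.
import numpy as np
from imblearn.over_sampling import SVMSMOTE
from imblearn.pipeline import Pipeline
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(234, 1000))        # stand-in for 15,979 gene expressions
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=234) > 0.5).astype(int)

pipe = Pipeline([
    ("select", SelectFromModel(Lasso(alpha=0.01))),  # keep genes with nonzero coefs
    ("smote", SVMSMOTE(random_state=0)),             # balance the minority class
    ("xgb", XGBClassifier(eval_metric="logloss")),
])
pipe.fit(X, y)
print("training accuracy:", pipe.score(X, y))
```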


Subject(s)
Artificial Intelligence , COVID-19 , Humans , COVID-19/diagnosis , COVID-19/genetics , Genetic Markers , Risk Factors , Neoplasm Proteins
20.
2022 IEEE International Conference on E-health Networking, Application and Services, HealthCom 2022 ; : 246-251, 2022.
Article in English | Scopus | ID: covidwho-2213190

ABSTRACT

In the current era of big data, very large amounts of data are generated at a rapid rate from a wide variety of rich data sources. Electronic health (e-health) records are examples of such big data. With technological advancements, more healthcare practice has gradually become supported by electronic processes and communication. This enables health informatics, in which computer science meets the healthcare sector to address healthcare and medical problems. Embedded in the big data are valuable information and knowledge that can be discovered by data science, data mining and machine learning techniques. Many of these techniques apply "opaque box" approaches to make accurate predictions. However, these techniques may not be transparent to users. As users are not necessarily able to clearly view the entire knowledge discovery (e.g., prediction) process, they may not easily trust the discovered knowledge (e.g., predictions). Hence, in this paper, we present a system for providing trustworthy explanations for knowledge discovered from e-health records. Specifically, our system provides users with global explanations for the important features among the records. It also provides users with local explanations for a particular record. Evaluation results on real-life e-health records show the practicality of our system in providing trustworthy explanations for the knowledge discovered (e.g., the accurate predictions made). © 2022 IEEE.
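
A minimal sketch of the global-versus-local distinction the paper describes, using permutation importance for the global view and LIME for a single record; the random-forest model and synthetic features are assumptions, not the authors' system.

```python
# Global view: permutation importance across all records.
# Local view: LIME explanation for one record.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                    # stand-in e-health features
y = (X[:, 0] + X[:, 3] > 0).astype(int)
clf = RandomForestClassifier(random_state=0).fit(X, y)

glob = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
print("global importances:", glob.importances_mean.round(3))

explainer = LimeTabularExplainer(X, mode="classification")
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=3)
print("local explanation:", exp.as_list())       # why record 0 got its prediction
```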
